Adaptive On-Line Learning Algorithms for Blind Separation: Maximum Entropy and Minimum Mutual Information
Authors
Abstract
There are two major approaches to blind separation: Maximum Entropy (ME) and Minimum Mutual Information (MMI). Both can be implemented by the stochastic gradient descent method for obtaining the de-mixing matrix. The MI is a contrast function for blind separation, while the entropy is not. To justify the ME, the relation between ME and MMI is first elucidated by calculating the first derivative of the entropy and proving that 1) mean subtraction is necessary in applying the ME, and 2) at the solution points determined by the MI, the ME will not update the de-mixing matrix in the directions of increasing the cross-talking. Secondly, the natural gradient, instead of the ordinary gradient, is introduced to obtain efficient algorithms, because the parameter space is a Riemannian space consisting of matrices. The mutual information is calculated by applying the Gram-Charlier expansion to approximate the probability density functions of the outputs. Finally, we propose an efficient learning algorithm which incorporates an adaptive method of estimating the unknown cumulants. It is shown by computer simulation that the convergence of the stochastic descent algorithms is improved by using the natural gradient and the adaptively estimated cumulants.
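The on-line natural-gradient update described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's full algorithm: it uses a fixed cubic nonlinearity `phi(y) = y**3` (a common choice for sub-Gaussian sources) in place of the adaptively estimated Gram-Charlier cumulants, and the mixing matrix `A`, learning rate, and source distribution are arbitrary choices for the demo.

```python
import numpy as np

def natural_gradient_step(W, x, lr=0.01):
    """One on-line natural-gradient update of the de-mixing matrix W.

    Applies the update dW = lr * (I - phi(y) y^T) W, which multiplies
    the ordinary stochastic gradient by W^T W to respect the Riemannian
    geometry of the matrix parameter space.
    """
    y = W @ x                      # current separated-output estimate
    phi = y ** 3                   # illustrative fixed nonlinearity
    n = W.shape[0]
    return W + lr * (np.eye(n) - np.outer(phi, y)) @ W

# Toy demo: two sub-Gaussian sources, mildly mixed.
rng = np.random.default_rng(0)
s = rng.uniform(-1.0, 1.0, size=(2, 5000))     # unknown sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # unknown mixing matrix
X = A @ s
X -= X.mean(axis=1, keepdims=True)             # mean subtraction (required by ME)
W = np.eye(2)
for t in range(X.shape[1]):
    W = natural_gradient_step(W, X[:, t], lr=0.01)
```

After the pass over the samples, `W @ A` should approach a scaled permutation matrix, reflecting the usual scale and order ambiguity of blind separation; the mean subtraction before learning mirrors point 1) of the abstract.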
Similar resources
An On-line Adaptation Algorithm for Adaptive System Training with Minimum Error Entropy: Stochastic Information Gradient
We have recently reported on the use of the minimum error entropy criterion as an alternative to mean square error (MSE) in supervised adaptive system training. A nonparametric estimator for Renyi’s entropy was formulated by employing Parzen windowing. This formulation revealed interesting insights about the process of information theoretical learning, namely information potential and informatio...
Speech separation based on the GMM PDF estimation
In this paper, the speech separation task will be regarded as a convolutive mixture Blind Source Separation (BSS) problem. The Maximum Entropy (ME) algorithm, the Minimum Mutual Information (MMI) algorithm and the Maximum Likelihood (ML) algorithm are main approaches of the algorithms solving the BSS problem. The relationship of these three algorithms has been analyzed in this paper. Based on t...
Information Theoretic Learning
of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy INFORMATION THEORETIC LEARNING: RENYI’S ENTROPY AND ITS APPLICATIONS TO ADAPTIVE SYSTEM TRAINING By Deniz Erdogmus May 2002 Chairman: Dr. Jose C. Principe Major Department: Electrical and Computer Engineering Traditionally, second-order ...
Research of Blind Signals Separation with Genetic Algorithm and Particle Swarm Optimization Based on Mutual Information
Blind source separation technique separates mixed signals blindly without any information on the mixing system. In this paper, we have used two evolutionary algorithms, namely, genetic algorithm and particle swarm optimization for blind source separation. In these techniques a novel fitness function that is based on the mutual information and high order statistics is proposed. In order to evalu...
Journal:
Volume  Issue
Pages -
Publication date: 1997